I am working with the “magnum-v2-123b-Q4_K_L” model (I also tried “magnum-v2-123b-iQ4_K_M”, with no difference). I've noticed that the context shift mechanism works incorrectly with this model, if it works at all. With the “Llama3.1-70B_Q5_K_M” model, a 16k context size, and a 1k-token response window, I can delete even the last few replies without the entire context being recalculated; I can work for hours without a single full recalculation. It's different with Mistral Large: there, the context is often recalculated completely even when a new reply is simply appended. Sometimes context shifting does work, but I haven't found the pattern yet. This is very inconvenient, especially compared to the “Llama3.1-70B” model. Is there a solution to this problem?
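For context, here is a minimal sketch of how a prompt cache with context shift typically decides what to reuse. This is illustrative Python, not actual llama.cpp/koboldcpp source, and all names in it are hypothetical: the cache keeps the KV state for the longest common token prefix between the previous prompt and the new one, and only the tail is recomputed. If the chat template re-serializes the history differently on each turn (or the model's tokenization is not prefix-stable), that common prefix collapses and you see a full recalculation even though you only appended a reply.

```python
# Minimal sketch (not backend source) of a prompt-cache check behind
# context shift: reuse the KV cache for the longest common token prefix
# and recompute only the tail. All names here are illustrative.

def common_prefix_len(cached_tokens: list[int], new_tokens: list[int]) -> int:
    """Length of the shared token prefix between the cache and the new prompt."""
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

def plan_reuse(cached_tokens: list[int], new_tokens: list[int]) -> tuple[int, int]:
    """Return (tokens kept from the KV cache, tokens that must be recomputed)."""
    keep = common_prefix_len(cached_tokens, new_tokens)
    return keep, len(new_tokens) - keep

cached      = [1, 10, 11, 12, 13, 14]           # tokens already in the KV cache
appended    = [1, 10, 11, 12, 13, 14, 15]       # same history + one new token
retemplated = [1, 99, 11, 12, 13, 14, 15]       # template changed an early token

print(plan_reuse(cached, appended))     # (6, 1): cheap incremental decode
print(plan_reuse(cached, retemplated))  # (1, 6): near-full recompute
```

Under this (assumed) mechanism, the symptom described above would point at the Mistral Large prompt template producing a different token stream near the start of the context on each turn, rather than at the quantization level, which matches the observation that switching from Q4_K_L to iQ4_K_M made no difference.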
